
    BRDF Representation and Acquisition

    Photorealistic rendering of real-world environments is important in a range of different areas, including visual special effects, interior/exterior modelling, architectural modelling, cultural heritage, computer games, and automotive design. Currently, rendering systems are able to produce photorealistic simulations of the appearance of many real-world materials. In the real world, a viewer's perception of an object depends on the lighting and on the object's material and surface characteristics: the way the surface interacts with light, how light is reflected, scattered, or absorbed by the surface, and the impact these characteristics have on material appearance. To reproduce this, it is necessary to understand how materials interact with light, which is why the representation and acquisition of material models has become such an active research area. This survey of the state of the art in BRDF representation and acquisition presents an overview of the Bidirectional Reflectance Distribution Function (BRDF) models used to represent surface and material reflection characteristics, and describes current acquisition methods for the capture and rendering of photorealistic materials.
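    As a concrete illustration of the kind of analytic model such surveys cover, the sketch below evaluates a simple Lambertian-plus-Blinn-Phong BRDF. The model choice and all parameter values here are illustrative stand-ins, not taken from the survey.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def brdf(n, wi, wo, kd, ks, shininess):
    """Evaluate a Lambertian-plus-Blinn-Phong BRDF (illustrative model).

    n, wi, wo: unit surface normal, incoming and outgoing directions.
    kd, ks: diffuse and specular albedo; shininess: specular exponent.
    """
    diffuse = kd / np.pi                        # Lambertian term
    h = normalize(wi + wo)                      # half-vector
    spec = ks * (shininess + 2) / (2 * np.pi) * max(np.dot(n, h), 0.0) ** shininess
    return diffuse + spec

n  = np.array([0.0, 0.0, 1.0])
wi = normalize(np.array([0.0, 1.0, 1.0]))    # 45-degree incident light
wo = normalize(np.array([0.0, -1.0, 1.0]))   # mirror direction
f = brdf(n, wi, wo, kd=0.5, ks=0.3, shininess=50)
```

    At the mirror direction the half-vector aligns with the normal, so the specular lobe dominates the Lambertian floor of kd/pi.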

    Three perceptual dimensions for specular and diffuse reflection

    Previous research investigated the perceptual dimensionality of achromatic reflection of opaque surfaces, by using either simple analytic models of reflection, or measured reflection properties of a limited sample of materials. Here we aim to extend this work to a broader range of simulated materials. In a first experiment, we used sparse multidimensional scaling techniques to represent a set of rendered stimuli in a perceptual space that is consistent with participants' similarity judgments. Participants were presented with one reference object and four comparisons, rendered with different material properties. They were asked to rank the comparisons according to their similarity to the reference, resulting in an efficient collection of a large number of similarity judgments. In order to interpret the space individuated by multidimensional scaling, we ran a second experiment in which observers were asked to rate our experimental stimuli according to a list of 30 adjectives referring to their surface reflectance properties. Our results suggest that perception of achromatic reflection is based on at least three dimensions, which we labelled "Lightness", "Gloss" and "Metallicity", in accordance with the rating results. These dimensions are characterized by a relatively simple relationship with the parameters of the physically based rendering model used to generate our stimuli, indicating that they correspond to different physical properties of the rendered materials. Specifically, "Lightness" relates to diffuse reflections, "Gloss" to the presence of high contrast sharp specular highlights, and "Metallicity" to spread out specular reflections.
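    The multidimensional scaling step can be illustrated with classical MDS, which recovers low-dimensional coordinates from a dissimilarity matrix. This is a minimal numpy sketch using made-up "stimuli" placed in a 2D plane, not the authors' sparse, rank-based procedure.

```python
import numpy as np

def classical_mds(D, k):
    """Embed a dissimilarity matrix D into k dimensions (classical MDS)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)            # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]          # pick the top-k eigenpairs
    scale = np.sqrt(np.maximum(vals[idx], 0.0))
    return vecs[:, idx] * scale               # n x k embedding coordinates

# Hypothetical "materials" in a 2D perceptual plane (invented positions).
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
X = classical_mds(D, 2)
D_rec = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
```

    For exactly Euclidean dissimilarities the embedding reproduces the input distances; real similarity-judgment data is noisier and typically handled with non-metric variants.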

    Appearance synthesis of fluorescent objects with mutual illumination effects

    We propose an approach for the appearance synthesis of objects with matte surfaces made of arbitrary fluorescent materials, accounting for mutual illumination. We solve the problem of rendering realistic scene appearances of objects placed close to each other under different conditions of uniform illumination, viewing direction, and shape, relying on standard physically based rendering and knowledge of the three-dimensional shape and bispectral data of scene objects. The appearance synthesis model decomposes the overall appearance into five components, each of which is expanded into a multiplication of spectral functions and shading terms. We show that only two shading terms are required, related to (a) diffuse reflection by direct illumination and (b) interreflection between two matte surfaces. The Mitsuba renderer is used to estimate the reflection components based on the underlying Monte Carlo simulation. The spectral computation of the fluorescent component is performed over a broad wavelength range, including ultraviolet and visible wavelengths. We also describe a method for compensating for the difference between the simulated and real images. Experiments were performed to demonstrate the effectiveness of the proposed appearance synthesis approach. The accuracy of the proposed approach was experimentally confirmed using objects with different shapes and fluorescence in the presence of complex mutual illumination effects.
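    Bispectral computations of this kind are commonly expressed with a Donaldson matrix, whose diagonal holds ordinary reflectance and whose off-diagonal entries transfer energy from short (excitation) to longer (emission) wavelengths. The sketch below uses an invented wavelength grid, band locations, and magnitudes, purely to show the matrix-vector form of the computation.

```python
import numpy as np

# Coarse wavelength grid spanning UV and visible (hypothetical, 10 nm steps).
wl = np.arange(360, 701, 10)          # nm
n = wl.size

# Toy Donaldson matrix: diagonal = ordinary (non-fluorescent) reflectance;
# entries with excitation shorter than emission model fluorescence.
D = np.diag(np.full(n, 0.4))
exc = (wl >= 360) & (wl <= 420)       # UV/blue excitation band (invented)
emi = (wl >= 500) & (wl <= 560)       # green emission band (invented)
D[np.ix_(emi, exc)] += 0.02           # energy moved to longer wavelengths

E = np.ones(n)                        # flat illuminant spectrum
L = D @ E                             # reflected-plus-fluoresced radiance
```

    Under this toy model, radiance in the emission band exceeds the reflectance-only value because UV/blue illumination is re-emitted at longer wavelengths.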

    Measurement and rendering of complex non-diffuse and goniochromatic packaging materials

    Realistic renderings of materials with complex optical properties, such as goniochromatism and non-diffuse reflection, are difficult to achieve. In the context of the print and packaging industries, accurate visualisation of the complex appearance of such materials is a challenge, both for communication and quality control. In this paper, we characterise the bidirectional reflectance of two homogeneous print samples displaying complex optical properties. We demonstrate that in-plane retro-reflective measurements from a single input photograph, along with genetic algorithm-based BRDF fitting, allow us to estimate an optimal set of parameters for reflectance models, to use for rendering. While such a minimal set of measurements enables visually satisfactory renderings of the measured materials, we show that a few additional photographs lead to more accurate results, in particular for samples with goniochromatic appearance.
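    Genetic-algorithm BRDF fitting of the sort mentioned above can be sketched as follows. The reflectance model (a cosine-lobe stand-in), parameter ranges, and GA settings are all simplified assumptions for illustration, not the authors' actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic in-plane "measurements": a cosine lobe with unknown albedo
# and exponent (ground truth 0.6 and 30, stand-ins for a real sample).
angles = np.linspace(0, np.pi / 2, 50)
true_p = np.array([0.6, 30.0])
def model(p, a):
    return p[0] * np.cos(a) ** p[1]
target = model(true_p, angles)

def fitness(p):
    return -np.mean((model(p, angles) - target) ** 2)   # negated MSE

# Minimal GA: truncation selection, uniform crossover, Gaussian mutation.
pop = rng.uniform([0.0, 1.0], [1.0, 100.0], size=(40, 2))
for gen in range(60):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]             # elitism: keep best 10
    children = []
    for _ in range(30):
        a, b = parents[rng.integers(10, size=2)]
        mask = rng.random(2) < 0.5                      # uniform crossover
        child = np.where(mask, a, b)
        child = child + rng.normal(0, [0.02, 1.0])      # per-gene mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(p) for p in pop])]
```

    Elitism guarantees the best fitness never decreases between generations, which makes even this bare-bones GA adequate for a smooth two-parameter fit.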

    Woven Fabric Model Creation from a Single Image

    We present a fast, novel image-based technique for reverse engineering woven fabrics at a yarn level. These models can be used in a wide range of interior design and visual special effects applications. To recover our pseudo-Bidirectional Texture Function (BTF), we estimate the three-dimensional (3D) structure and a set of yarn parameters (e.g., yarn width, yarn crossovers) from spatial and frequency domain cues. Drawing inspiration from previous work [Zhao et al. 2012], we solve for the woven fabric pattern and from this build a dataset. In contrast, however, we use a combination of image space analysis and frequency domain analysis, and, in challenging cases, match image statistics with those from previously captured known patterns. Our method determines, from a single digital image captured with a DSLR camera under controlled uniform lighting, the woven cloth structure, depth, and albedo, thus removing the need for separately measured depth data. The focus of this work is on the rapid acquisition of woven cloth structure, and we therefore use standard approaches to render the results. Our pipeline first estimates the weave pattern, yarn characteristics, and noise statistics using a novel combination of low-level image processing and Fourier analysis. Next, we estimate a 3D structure for the fabric sample using a first-order Markov chain and our estimated noise model as input, also deriving a depth map and an albedo. Our volumetric textile model includes information about the 3D path of the center of the yarns, their variable width and hence the volume occupied by the yarns, and colors. We demonstrate the efficacy of our approach through comparison images of test scenes rendered using: (a) the original photograph, (b) the segmented image, (c) the estimated weave pattern, and (d) the rendered result.
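    The frequency-domain step, recovering yarn spacing from periodic brightness variation, can be sketched with a 1D FFT on a synthetic scanline. The spacing value and the noise term are invented for illustration; the paper's actual analysis operates on real fabric images.

```python
import numpy as np

# Synthetic scanline across a plain weave: brightness oscillates with
# yarn spacing (here 16 pixels per yarn, a made-up value), plus a small
# non-harmonic disturbance standing in for imaging noise.
n, spacing = 512, 16
x = np.arange(n)
profile = 0.5 + 0.4 * np.cos(2 * np.pi * x / spacing) + 0.02 * np.sin(0.7 * x)

# Remove the DC offset, then locate the dominant spatial frequency.
spectrum = np.abs(np.fft.rfft(profile - profile.mean()))
k = np.argmax(spectrum)                 # dominant frequency bin
estimated_spacing = n / k               # pixels per yarn
```

    The dominant rfft bin maps directly back to a period in pixels, which is the yarn spacing estimate.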

    Colour Calibration of a Head Mounted Display for Colour Vision Research Using Virtual Reality

    Virtual reality (VR) technology offers vision researchers the opportunity to conduct immersive studies in simulated real-world scenes. However, an accurate colour calibration of the VR head mounted display (HMD), both in terms of luminance and chromaticity, is required to precisely control the presented stimuli. Such a calibration presents significant new challenges, for example, due to the large field of view of the HMD, or the software implementation used for scene rendering, which might alter the colour appearance of objects. Here, we propose a framework for calibrating an HMD using an imaging colorimeter, the I29 (Radiant Vision Systems, Redmond, WA, USA). We examine two scenarios, both with and without using a rendering software for visualisation. In addition, we present a colour constancy experiment design for VR through a gaming engine software, Unreal Engine 4. The colours of the objects of study are chosen according to the previously defined calibration. Results show high colour constancy performance among participants, in agreement with recent studies performed on real-world scenarios. Our studies show that our methodology allows us to control and measure the colours presented in the HMD, effectively enabling the use of VR technology for colour vision research.
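    A common building block of such a display characterisation is a least-squares fit of an RGB-to-XYZ matrix from colorimeter measurements. The sketch below assumes a linear (already gamma-corrected) display and substitutes an invented ground-truth matrix for real HMD measurements; a full calibration would also model each channel's tone response.

```python
import numpy as np

# Hypothetical "measured" data: XYZ responses of the display at a few
# RGB drive levels, simulated from an invented characterisation matrix.
M_true = np.array([[0.41, 0.36, 0.18],    # stand-in RGB -> XYZ matrix
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])
rgb = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                [1, 1, 1], [0.5, 0.5, 0.0]], dtype=float)
xyz = rgb @ M_true.T                      # simulated colorimeter readings

# Least-squares fit of the characterisation matrix from the measurements.
A, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
M_fit = A.T                               # rows map linear RGB to X, Y, Z
```

    With noise-free, linearly generated data the fit recovers the matrix exactly; with real measurements the residual of the fit indicates how well a 3x3 model describes the display.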
